Emerging at the turn of the 2010s, deep learning technologies (a subfamily of machine learning) rest on opaque architectures. As they automate more and more tasks, these black boxes, unintelligible to humans, elude the usual ethical and moral categories. Faced with these challenges, some people want to “take back control.” Yet if machines, through a mirror effect, reveal the “inhuman” part of humanity, does the human/machine separation still make sense? Moreover, embedding ethics in a computer program (“by design”) requires making ethical norms explicit and fixed, which is the opposite of a dialogical dynamic. To meet the challenges of AI, we must therefore articulate “normative ethics” (prescription) with “capability ethics” (potentiation).